Pre-deployment Common Law Duty of Care and Article 36 Obligations in relation to Autonomous Weapons: Interface between Domestic Law and International Humanitarian Law?
OZLEM ULGEN*
Visiting Fellow, Lauterpacht Centre for International Law, University of Cambridge; Senior Lecturer in Law, School of Law, Birmingham City University, UK
Table of Contents
- Introduction
- Mapping Out the Pre-deployment Duty of Care in relation to Autonomous Weapons
- The Smith Judgment
- Pre-deployment Duty of Care
- Is there a Problem of Foreseeability?
- Time and Place as Determining Factors
- Potential Limitations on the Standard of Care
- Construing Combat Immunity Narrowly
- Private Actors and Pre-deployment Duty of Care
- The BT Drone Strikes Case
- Product Liability
- Specific Pre-deployment Duties in relation to Autonomous Weapons
- Duty during Development and Testing
- Duty to Train in Use of Supporting Technologies and Equipment
- Duty to Properly Equip with Supporting Technologies, Devices, and Equipment
- Duty of Ongoing Review
- Pre-deployment Weapons Review Obligation for Autonomous Weapons
- Lifecycle Stages and Operational Factors of Autonomous Weapons
- Demonstration Stage Reliability and IHL Compliance
- Manufacture Stage Specification and Quality Control
- In-service Stage Ongoing Review
- Ensuring Target Recognition Performs to Intended Function
- Ensuring Human Control over Weapon Release or Authorisation
- Ensuring Correct Employment against Identified Enemy Targets
- Conclusion
I. Introduction
In an age of high-tech and remotely controlled warfare, new weapons are being developed to operate autonomously (i.e. without human input or oversight) in relation to the critical functions of acquiring, tracking, selecting, and attacking targets. While debates continue as to whether autonomous weapons are ethical or legal, pre-deployment obligations in relation to their use exist in both domestic law and international humanitarian law (IHL). First, in Smith and Others v. MOD (2013) the UK Supreme Court held that combat immunity does not apply in circumstances where military operations or acts take place before actual deployment or active combat. This raises interesting questions about the types of pre-deployment activities which may be subject to a duty of care on the part of the State towards members of its own armed forces and others. Would Government decisions in the development, testing, and procurement of autonomous weapons attract a duty of care and fall outside the doctrine of combat immunity? Could this duty of care extend to civilians injured as a result of inadequate pre-deployment due diligence of autonomous weapons? Second, Article 36 of Additional Protocol I to the Geneva Conventions (API) provides a pre-deployment review procedure for new weapons. At the pre-deployment stage where a new weapon is being developed and tested, Article 36 requires continuous assessment of whether normal or expected use of the weapon is prohibited by API or any other rule of international law, including principles of legitimate targeting, proportionality and unnecessary suffering. Could this be a duty of due diligence? What does it entail? How is it to be enforced?
* A draft of this paper was originally presented at the conference on International Law of Military Operations: Mapping the Field, Exeter University, 21–23 June 2016. I am grateful for the anonymous reviewer’s comments as well as comments received from conference organisers and delegates.
This article explores the interface between a pre-deployment common law duty of care and the Article 36 pre-deployment review procedure in relation to autonomous weapons. It considers whether State and private actors (e.g. manufacturers of autonomous weapons and telecommunications companies) involved in pre-deployment activities owe a duty of care to combatants and civilians. Part II examines the UK Supreme Court’s judgment in Smith and Others v. MOD to identify the basis upon which a pre-deployment common law duty of care can be established and extended to pre-deployment activities relating to autonomous weapons. Part III maps out the nature of a pre-deployment duty of care in relation to autonomous weapons and the content of specific duties. Part IV then examines the nature of the pre-deployment review procedure under Article 36 of API, and the extent to which it establishes a legally enforceable obligation in relation to autonomous weapons. While Article 36 does not require international supervision, it does necessitate domestic implementation of the obligation to review. Consideration is given as to whether it can be enforced at the domestic level by combatants and civilians.
II. Mapping Out the Pre-deployment Duty of Care in relation to Autonomous Weapons
How could a pre-deployment duty of care arise for use of autonomous weapons? As these weapons have varying degrees of autonomy, it is important to match the level of autonomy of such weapons with their functionality, to determine the scope of any duty. The key feature of autonomy is that, whether to some extent (or even fully), it removes human involvement, or more specifically, “human central thinking activities”.[1]
There are three main categories of autonomous weapons: (i) remotely controlled, which involves levels of automation and human input throughout the target and attack process (e.g. unmanned armed aerial vehicles (UAVs), such as the General Atomics MQ-9 Reaper, are remotely operated by human pilots but have autonomy in terms of navigational decisions even though targeting actions are performed manually by remote pilots);[2] (ii) semi-autonomous, which involves greater autonomy in “acquiring, tracking, and identifying potential targets; cueing potential targets to human operators; prioritizing selected targets; timing of when to fire; or providing terminal guidance to home in on selected targets”,[3] with human input for the attack decisions; and (iii) fully autonomous, which involves higher levels of independent thinking as regards acquiring, tracking, selecting and attacking targets, without the need for human input.[4]
Human-in-the-loop means the weapon is directly controlled and operated by a human, whereas human-on-the-loop means the weapon is monitored by a human. Fully autonomous weapons would function entirely independently, without a human in or on the loop.
1. O. Ulgen, ‘Human Dignity in an Age of Autonomous Weapons: Are We in Danger of Losing an “Elementary Consideration of Humanity”?’, Vol. 8 No. 9 ESIL Conference Paper Series 2016, pp. 1–19, pp. 7–8, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2912002 (forthcoming in OUP edited collection - updated copy with author). Unless indicated otherwise, all URLs cited were last accessed on 5 August 2017. See also, O. Ulgen, ‘Autonomous UAV and Removal of Human Central Thinking Activities: Implications for Legitimate Targeting, Proportionality, and Unnecessary Suffering’ (forthcoming), pp. 1–45.
2. Joint Doctrine Note 2/11, The UK Approach to Unmanned Aircraft Systems (30 March 2011), Annex A.6-A.7.
3. United States of America Department of Defense Directive, Autonomy in Weapons Systems, No. 3000.09 (21 November 2012) 14.
4. See generally, Ulgen (forthcoming), supra note 1.
Taking the critical functions of acquiring, tracking, selecting, and attacking (ATSA) targets in any autonomous weapon, we can start to see what may be the content of any pre-deployment duty (see Appendix).
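Purely as an illustrative aid, and not as a description of any actual weapon system, the following Python sketch models the three categories described above and which ATSA critical functions are assumed to require human input in each case. The class names, the example profiles, and the mapping of the categories to human-in/on/out-of-the-loop control are hypothetical assumptions made for the purpose of framing the scope of any duty.

```python
from dataclasses import dataclass
from enum import Enum


class Autonomy(Enum):
    """The three categories of autonomy described in the text (one possible mapping)."""
    REMOTELY_CONTROLLED = "human-in-the-loop"
    SEMI_AUTONOMOUS = "human-on-the-loop"
    FULLY_AUTONOMOUS = "human out of the loop"


# The four ATSA critical functions referred to in the text.
ATSA = ("acquire", "track", "select", "attack")


@dataclass
class WeaponProfile:
    """Hypothetical record of which ATSA functions require human input."""
    name: str
    autonomy: Autonomy
    human_input_required: dict  # ATSA function -> True if a human must act

    def functions_without_human_input(self):
        # Functions the weapon performs independently of any human input.
        return [f for f in ATSA if not self.human_input_required.get(f, True)]


# Example profiles: a remotely controlled UAV whose targeting actions are
# performed manually by remote pilots, and a fully autonomous weapon
# performing all four ATSA functions independently.
reaper_like = WeaponProfile(
    name="remotely controlled UAV",
    autonomy=Autonomy.REMOTELY_CONTROLLED,
    human_input_required={"acquire": True, "track": True, "select": True, "attack": True},
)
fully_autonomous = WeaponProfile(
    name="fully autonomous weapon",
    autonomy=Autonomy.FULLY_AUTONOMOUS,
    human_input_required={f: False for f in ATSA},
)

# The wider the set returned here, the greater the pre-deployment scrutiny
# the text suggests is needed.
print(fully_autonomous.functions_without_human_input())  # ['acquire', 'track', 'select', 'attack']
```

On this sketch, the content of any pre-deployment duty would track the list of functions performed without human input, which is the point the Appendix is intended to capture.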
This Part considers the significance of Smith and Others v. MOD (2013) in setting parameters for the existence and content of a pre-deployment duty of care. It discusses issues relating to the problem of foreseeability in autonomous weapons, time and place as determining factors of a pre-deployment duty, potential limitations on pre-deployment duty, and construing combat immunity narrowly. It considers how private actors (e.g. manufacturers of autonomous weapons) may be involved in pre-deployment activities and attract a pre-deployment duty of care to combatants and civilians.
1. The Smith Judgment
Smith and Others v. MOD (2013)[5] concerned two sets of common law negligence claims against the British Ministry of Defence: the “Challenger claims” related to the death and injury of servicemen due to “friendly fire” from one of their own tanks, and the “Ellis claim” concerned the death and injury of servicemen from IED attacks on their vehicles. Both claims alleged breach of a duty of care by:
failure to ensure that the claimants’ tank and the tanks of the battle group that fired on it were properly equipped with the technology and equipment that would have prevented the incident. That equipment falls into two categories: target identity devices that provide automatic confirmation as to whether a vehicle is a friend or foe; and situational awareness equipment that permits tank crews to locate their position and direction of sight accurately … failing to provide soldiers with adequate recognition training pre-deployment and also in theatre.[6]
The MOD argued that the claims should “all be struck out on the principle of combat immunity” and pleaded that “it would not be fair, just or reasonable to impose a duty of care on the MOD in the circumstances of those cases.”[7]
The majority judgment (Lord Hope, Lord Walker, Lady Hale and Lord Kerr) held that there was a case to answer, but the matter should be determined at trial on the evidence. As an employer, the MOD was deemed to have a duty of care, but the extent of that duty and whether it was breached had to be determined at trial.[8]
The doctrine of combat immunity did not apply to the Challenger claims because:
At the stage when men are being trained, whether pre-deployment or in theatre, or decisions are being made about the fitting of equipment to tanks or other fighting vehicles, there is time to think things through, to plan and to exercise judgment. These activities are sufficiently far removed from the pressures and risks of active operations against the enemy for it not to be unreasonable to expect a duty of care to be exercised, so long as the standard of care that is imposed has regard to the nature of these activities and to their circumstances.[9]
The minority judgments of Lords Mance and Wilson opined, largely on the basis of policy and national security considerations, that there was no duty of care in either the Challenger or Ellis claims, and combat immunity applied:
… It is not difficult to identify situations in which the common law has concluded on policy grounds that no duty of care should exist.[10]
… Policy decisions concerning military procurement and training involve predictions as to uncertain future needs, the assessment and balancing of multiple risks and the setting of difficult priorities for the often enormous expenditure required, to be made out of limited resources. They are often highly controversial and not infrequently political in their nature. These may well also be influenced by considerations of national security which cannot openly be disclosed or discussed.[11]
… the Challenger claims … should be struck out in their entirety on the basis that the state owes no such duty of care as alleged with regard to the provision of technology, equipment or training to avoid death or injury in the course of an active military operation. Similarly, with regard to the Ellis claim in negligence, … there was no such duty of care as alleged regarding the provision of different or differently equipped vehicles …[12]
5. Smith and Others v. MOD (2013) UKSC 41.
6. Id., §§ 9–12.
7. Id., § 13.
8. Id., §§ 81 and 98.
9. Id., § 95.
10. Id., § 118.
11. Id., § 128.
12. Id., § 136.
Lord Carnwath’s separate dissent agreed with Mance and Wilson on the Challenger claims, but differed in relation to Ellis in finding that there may be a possible duty of care and that combat immunity may apply:
we should apply different considerations to the later Snatch [Ellis] claims. They occurred in July 2005 and February 2006, after the time (May 2003) when (as Lord Hope explains: para 1) “major combat operations ceased and were replaced by a period of military occupation” … If as I believe the policy reasons for excluding liability are related to the special features of war or active hostilities, it would be wrong in my view to apply the same approach to peace-keeping operations, however intrinsically dangerous … It is alleged, as I understand, that there was an unjustified failure, following earlier incidents, to take readily available steps to deal with a known and preventable risk. I would not regard such claims as necessarily excluded as a matter of general policy, either at common law or under article 2. Since all the issues will now have to be considered at trial, it is unnecessary and probably undesirable for me to say more.[13]
Lords Mance, Wilson and Carnwath, rejecting the existence of any duty of care on grounds of national security, opined there could be no “middle ground” of potential liability for preparation for or conduct of hostilities.[14] But Lord Carnwath’s acceptance of the possibility of establishing a duty of care in post-combat peace-keeping operations in the Ellis claim supports the majority’s approach that the MOD is not exempt from a duty of care and that combat immunity is not absolute. The majority focused on the need to construe combat immunity narrowly while Carnwath focused on principles of negligence.[15]
What is the trial judge to make of the Supreme Court judgment? Does it represent unnecessary “judicialisation of war”, as some commentators contend?[16] Or is it a timely and necessary reminder that duties flow from committing troops and weapons to combat? The door to judicial scrutiny is certainly not firmly shut, as the majority distinguishes pre-deployment activities from combat activities. Even the minority (in the form of Lord Carnwath’s dissent) recognises that different phases of combat activity will require different treatment and may attract legal responsibility. The pre-deployment phase is of particular relevance to new weapons technology, such as autonomous weapons, where there is time to develop weapon performance and capability and where the harm the weapon is intended to cause is foreseeable. So what is the general pre-deployment duty of care emanating from the case? And how can it be construed for autonomous weapons?
2. Pre-deployment Duty of Care
Legal authority in tort law points to restricted duties owed by the police and emergency services, and these cases are cited as analogous to the military. In Hill v. Chief Constable of West Yorkshire it was held that for reasons of public policy the police owed no actionable duty of care to a victim in the course of investigating crime.[17] In Van Colle v. Chief Constable of the Hertfordshire Police a majority of the House of Lords held that the same principle applied even where the police were aware of a specific threat to an individual witness.[18]
Just like any other employer, the police and emergency services owe a duty of care towards their employees, and there is no special rule in English law qualifying the obligations of others towards fire fighters, police officers, and ambulance technicians. Although the work of such public servants is inherently dangerous, they accept the risks but not the risks which the exercise of reasonable care on the part of their employers could avoid.[19] In distinguishing inherent risks from avoidable risks through reasonable care, the Court of Appeal in King v. Sussex Ambulance Service NHS Trust dismissed a claim related to injuries sustained by an ambulance technician, who was required in the course of an emergency call to help in carrying a patient downstairs.[20]
Two categories of duties exist in the military: one is a general employer’s duty and the other is a pre-deployment duty. In between these two areas sit military procurement and authorisation decisions which, given their wide ramifications, should be subject to some form of scrutiny.
13. Id., §§ 187–188.
14. Id., §§ 104, 148, 159, 161, 186.
15. Id., §§ 89, 90, 92, 156–157, 161–166.
16. J. Morgan, ‘Military Negligence: Reforming Tort Liability after Smith v. Ministry of Defence’ (Paper presented to the House of Commons Defence Select Committee, November 2013); J. Morgan, ‘Negligence: into Battle’, Cambridge Law Journal 2013, pp. 14–17.
17. Hill v. Chief Constable of West Yorkshire (1989) AC 53. See also Z v. United Kingdom (2001) 34 EHRR 97, which accepted the role of public policy in determining the limits of liability.
18. Van Colle v. Chief Constable of the Hertfordshire Police (2009) AC 225.
19. King v. Sussex Ambulance Service NHS Trust (2002) ICR 1413 per Hale LJ, §§ 21–23.
20. Id.
But these are considered high policy matters for the Government and generally not justiciable before the courts. Under the employer’s duty, the MOD, as employer of servicemen and women, has a general duty to provide a safe system of work to the extent possible under the circumstances. This generally applies to circumstances away from the theatre of operation. Due to the inherently dangerous nature of war and for policy reasons, activities during combat are not justiciable and therefore no duty arises. Thus, decisions taken by commanders on the ground are operational matters left to their judgment rather than judicial scrutiny. Further, a soldier does not owe his fellow soldier a duty of care during combat.
The pre-deployment duty relates to pre-deployment activities that are so far removed from the actual theatre of operation that there is time to deliberate, consider alternatives, and foresee damage. The three-part test in Caparo for establishing a duty of care (reasonably foreseeable loss; relationship of sufficient proximity; and fair, just and reasonable)[21] is satisfied for pre-deployment activities involving equipping and training personnel for combat. It is reasonably foreseeable that unless military personnel are properly and adequately trained and equipped, they will face injury or death which might have been preventable. There is an obvious relationship of proximity between personnel and the MOD, and it is just, fair, and reasonable to expect a reasonable level of duty of care in preparing and equipping personnel for combat.[22]
A pre-deployment duty is also relevant to autonomous weapons where there are humanly-controllable issues of design capability, functionality, and operational and performance review. These are explored in more detail in Part III.
A. Is there a Problem of Foreseeability?
Some argue there are two factors which make establishing liability for autonomous weapons problematic. First, the unpredictability of advanced artificial intelligence systems that learn from environmental data and keep on learning in ways unforeseen by designers. Second, autonomous weapons having causal agency without legal agency; autonomous artificial agents “act” on their own but are not legally accountable.[23]
These factors appear to fall foul of the first limb of the Caparo test: a reasonably foreseeable loss. But arguably there is foresight in design capability for levels of autonomy and functionality in autonomous weapons (i.e. what the designer intends the weapon to do and to what extent) (see Appendix). At the fully autonomous level, the designer clearly intends the weapon to independently perform all four ATSA critical functions. The harm that can result from the wrong target being selected and attacked is reasonably foreseeable. So we come back to the pre-deployment phase of designer and manufacturer duty of care. Lack of foresight may be a problem where the weapon is relying solely on its own operational experience database for further learning without any human input or oversight. But this is an extreme scenario and one which would not meet the requirement of ongoing review under Article 36 of API. So necessarily, at this point, there is human involvement. The problem occurs with “dynamic learning systems” where the machine refers to its historical data to adapt its function over time.[24]
This introduces uncertainty and unpredictability as to how the machine will behave beyond what the designer intended.
B. Time and Place as Determining Factors
Time to deliberate and remoteness from combat activity are central to the pre-deployment duty of care. In Smith, the court identified these two elements as: (1) there is time to think things through, to plan and to exercise judgment; and (2) the activities are sufficiently far removed from the pressures and risks of active operations.[25]
Lord Hope considered that, “at the stage when men are being trained, whether pre-deployment or in theatre, or decisions are being made about the fitting of equipment to tanks or other fighting vehicles, there is time to think things through, to plan and to exercise judgment.”[26] Such activities are “sufficiently far removed from the pressures and risks of active operations” that it would not be unreasonable to expect a duty of care to be exercised.[27]
21. Caparo Industries plc v. Dickman (1990) 2 AC 605.
22. Id.
23. P.M. Asaro, ‘The Liability Problem for Autonomous Artificial Agents’ (Association for the Advancement of Artificial Intelligence, 2015).
24. P.M. Asaro, ‘Roberto Cordeschi on Cybernetics and Autonomous Weapons: Reflections and Responses’, No. 3 Paradigmi. Rivista di critica filosofica 2015, pp. 83–107, pp. 96–98.
25. Smith, supra note 5, § 95.
26. Id.
Taken at its broadest, any activity that involves time to think, plan and exercise judgment, and which is remote from actual combat, may attract a duty of care. According to Lord Hope, it is easier to determine a breach where failure is attributed to pre-deployment decisions about training or equipment when there is time to assess risks to life that have to be planned for, than where decisions are attributable to what was taking place in theatre.[28] The more constrained a decision-maker is by prior policy decisions taken from a high level of command, or by the effects of contact with the enemy, the more difficult it will be to find that the decision-maker in theatre was at fault.[29]
C. Potential Limitations on the Standard of Care
In Smith the court stated that the pre-deployment standard of care is determined by what is reasonable in the circumstances, having regard to the nature of activities.[30]
The impression is that the standard is an open-ended assessment developing over time. But there are important considerations which may restrict the content of the standard and, ultimately, prevent liability being established. In determining a reasonable standard, courts must take account of the following considerations: (i) they should be slow to review operational decisions; (ii) even if decisions about procurement are taken remote from the battlefield they will not always be appropriate for review; (iii) subjecting the operations of the military while on active service to close scrutiny risks undermining the State’s ability to defend itself; (iv) failures of a systemic kind or preventative operational measures cannot be excluded from scrutiny; and (v) a duty should not impose an unrealistic or disproportionate burden on the authorities.[31]
These are all relevant and important considerations indicating the court does not intend to set precedent for automatic judicial scrutiny. The national security risk of scrutinising “operations of the military while on active service” reinforces the divide between non-justiciable active combat scenarios and justiciable pre-deployment scenarios, and the need to avoid establishing a duty that places an “unrealistic or disproportionate” burden. But the court is also not prepared to accept an open-and-shut case of blanket combat immunity or that all decisions necessarily fall within active combat scenarios. Certain types of failures are so serious that they cannot be excluded from scrutiny. Reference to procurement decisions “not always” being appropriate for review suggests there may be instances where it is appropriate to review.
D. Construing Combat Immunity Narrowly
Combat immunity is part of customary international law as enshrined in Article 1 of the Hague Regulations annexed to the Hague Convention IV on the Laws and Customs of War on Land, 1907, which forms part of the common law. It is a common law defence in tort claims and applies during the conduct of military operations so that the State and its military forces are not under an actionable duty of care to avoid causing loss or injury to combatants or civilians.[32]
Combat immunity does not apply in circumstances where military operations or acts take place before actual deployment or active combat. Thus, Government decisions about procurement of equipment and training of soldiers would fall outside combat immunity, although these have been deemed non-justiciable matters for national security reasons. Although in Smith the judges diverged on the application of combat immunity to the particular case, they all agreed that combat immunity should be construed narrowly. The majority held that although combat immunity is generally available as a defence under common law, there is a distinction between pre-deployment activities and actual combat activities and the doctrine is limited in its application to the latter. This limitation is justified on the grounds that time and distance from actual combat activity enables thinking, planning and exercise of judgment and, therefore, a greater standard of care can be expected.
The case authorities on combat immunity prior to Smith suggest a broad approach is favoured. Starting with Shaw Savill & Albion Co Ltd v. Commonwealth, Dixon J stated that combat immunity should extend to “all active operations against the enemy”.[33] Gibbs CJ in Groves v. Commonwealth stated that “to hold that there is no civil liability for injury caused by negligence of persons in the course of an actual engagement with the enemy seems to me to accord with common sense and sound policy.”[34]
27. Id.
28. Id., § 99.
29. Id.
30. Id., §§ 94–95.
31. Id., §§ 64–66, 76–81, 99.
32. See generally, Shaw Savill & Albion Co Ltd v. Commonwealth (1940) 66 CLR 344; Mulcahy v. MoD (1996) QB 732; Multiple Claimants v. Ministry of Defence (2003) EWHC 1134 (QB).
33. Shaw Savill & Albion Co Ltd v. Commonwealth, id., 361.
Following this line of authority, Mulcahy v. MoD, concerning the first Iraq war, established that one soldier does not owe a duty of care to another member of the armed forces when engaging the enemy in the course of hostilities.[35] In Multiple Claimants v. The Ministry of Defence, Owen J stated that combat immunity extends to all active operations against the enemy in which service personnel are exposed to attack or the threat of attack, including planning and preparation of operations in which armed forces may come under attack or meet armed resistance.[36]
But these cases point to a narrowing of combat immunity so that it applies to situations where there are “active operations” or, in the case of planning and preparation, where armed forces face “attack or meet armed resistance”. The extension of combat immunity to planning and preparation in Multiple Claimants v. The Ministry of Defence suggests all pre-deployment activities will be covered. But the qualifier is that the activity must expose armed forces to “attack or meet armed resistance”, and not all pre-deployment activities will involve such exposure. Lord Hope in Smith criticised such an extension for not being supported by existing case law. He made clear that the extension as formulated by Owen J relates only to planning and preparation of operations in which injury is sustained, and not to planning and preparation in general for possible unidentified further operations.[37] One consequence of this is that combat immunity should not apply to pre-deployment activities where there is time to deliberate, consider alternatives, and foresee damage.
Applying the time and place elements of the pre-deployment duty of care, the majority found that the activities complained about in the Challenger claims (i.e. lack of target identity devices and situational awareness equipment, and lack of recognition training) fell outside the scope of combat immunity.[38] In relation to the Ellis claim, the majority held it related to a period in hostilities in which there was constant threat of enemy action by insurgents which was liable to cause death or injury, so some of the alleged failings may relate to decisions taken during active operations and may fall within combat immunity.[39] This was a matter to be determined at trial through evidence. But Lord Carnwath did not believe such a “middle ground” of potential liability existed.[40] Lords Mance and Wilson believed that the “middle ground” invited further litigation rather than preventing “judicialisation of war”, and that it would present “real difficulties” for the court in having to adjudicate over matters “where the common law should not treat”.[41]
3. Private Actors and Pre-deployment Duty of Care
States engaging in military operations overseas rely on a combination of State-owned or leased military bases and services of commercial contractors. Private sector contribution to military operations includes manufacture and supply of military equipment and weapons, telecommunications installation and technical support, and computer software programming and maintenance. In relation to autonomous weapons, core private commercial actors research and invest in new weapons technology, and other commercial actors provide telecommunications support systems for transmission of intelligence data and coordination of military operations. Such third party involvement may not be neutral or a simple business transaction considering it relates to use of autonomous weapons which may eventually cause injury and death. Could private actor
involvement in autonomous weapons establish an independent pre-deployment duty of care? Or could their involvement be attributed to the State?
A. The BT Drone Strikes Case
The 2011 OECD Guidelines for Multinational Enterprises identify business activities that may cause or contribute to adverse human rights impacts, and require multinational enterprises to engage in effective due diligence practices.[42]
34. Groves v. Commonwealth (1982) 150 CLR 113, § 3.
35. Mulcahy v. MoD, supra note 32, p. 751, § 62, per Sir Ian Glidewell.
36. Multiple Claimants v. Ministry of Defence, supra note 32, § 2.C.20.
37. Smith, supra note 5, §§ 88–90, per Lord Hope.
38. Id., § 95.
39. Id., § 96.
40. Id., § 186.
41. Id., §§ 147–150.
A case in point is the BT Group plc contract with the US Government to supply fibre-optic communications cable between American military bases in the UK, Italy and Djibouti.[43] The cable connects RAF Croughton in Northamptonshire, UK (a US Air Force communications station) to Camp Lemonnier in Djibouti, East Africa (a US base for UAV operations undertaking “targeted killings” in Yemen and Somalia).
Of particular concern are the cable service start and end points and their connection to American UAV operations. The service start point is RAF Croughton, which processes one-third of all US military communications in Europe.[44] The network must go through Capodichino in Naples, Italy, which is the US communications hub for the US Navy’s European and African command centres. The US Navy has two teleports in Naples which connect satellite communications to the DISN (Defense Information Systems Network). Teleports were designed to connect satellite users to core US defence computer systems and comprise part of a plan to ease network congestion caused by increasing numbers of drone missions sending streaming video back to the core DISN systems.[45] The service end point is Camp Lemonnier, and the contract coincided with the Camp being officially designated a military base for US Overseas Contingency Operations, a term used for the “War on Terror”.[46] The cable network must avoid going through India, Pakistan, Iran and Syria.[47] Section M.9(d)(8) provides that “KG-340 encryption devices will be provided by Government [i.e. US Government] at both ends of the circuit and will be managed by the Government”. These encryption devices are built to US National Security Agency certification standard for protecting top secret classified data.[48] Such contractual provisions, features and technical aspects show that supply of the communications cable is not for general use but specifically for American military and intelligence use.[49]
BT is headquartered in the UK and listed on the London Stock Exchange. The UK is obliged to recommend that BT observes the OECD Guidelines. BT’s activities include entering the contract and supplying the service, which directly affects the human rights of Yemeni citizens. By contracting with DISA (Defense Information Systems Agency), an agency of the US Government, to provide a fibre-optic communications network enabling transmission of information between two American military bases, with the service end point being a military base known for drone operations undertaking “targeted killings” in Yemen, BT has assisted in violations of fundamental human rights of Yemeni citizens. The cable enables transmission of locational information of human targets which is then used by drones to launch attacks. By supplying the communications cable network used to transmit targeting information for drone strikes in Yemen, which have caused the death of terrorist suspects and civilians, BT may have contributed to arbitrary loss of life without due process, contrary to the non-derogable rights to life and to recognition as a person before the law. Certain types of Hellfire missiles used on UAVs cause burning in targets and incineration of bodies.[50] Evidence of pre-contract and during-contract drone strikes in Yemen involving burning of the human body, causing fear and collateral damage amongst the local population, may also put BT at risk of contributing to torture, cruel, inhuman or degrading treatment.[51]
42. OECD Guidelines for Multinational Enterprises (25 May 2011). See Part I.IV Human Rights, §§ 1–6 (Respecting International Human Rights), Chapter IV Human Rights § 2 (Avoid Causing or Contributing to Adverse Human Rights Impacts), Part I.II General Policies, Commentary § 14 (Due Diligence).
43. BT Contract Specification (2012), Sections M.13 and M.18, http://www.theguardian.com/business/interactive/2014/aug/27/bt-drone-cable-contract-camp-lemonnier-djibouti.
44. Reprieve Complaint to the UK National Contact Point under the Specific Instance Procedure of the OECD Guidelines for Multinational Enterprises in respect of BT plc (15 July 2013), p. 2.
45. M. Ballard, ‘Analysis: How the UK Connects to the US Global Drone Network’, Computer Weekly (2 May 2014).
46. Id.
47. BT Contract, supra note 43, Section M.6.
48. See SafeNet Product Brief 2011, http://www.computerweekly.com/blogs/public-sector/SafeNet%20KG340%20NSA.pdf.
49. BT Contract, supra note 43; Ballard, supra note 45.
50. See, for example, J. Cox, ‘The Yemeni Man Suing BT for America’s Deadly Drone Attacks’, Vice News (23 May 2014), www.vice.com/en_uk/read/the-yemeni-man-suing-bt-for-american-drone-strikes (American UAV strike of 23 January 2013 killing four individuals, including two civilians); G. Greenwald, ‘Burning Victims to Death: Still a Common Practice’, The Intercept (4 February 2015), https://theintercept.com/2015/02/04/burning-victims-death-still-common-practice/.
51. Reprieve Complaint, supra note 44, pp. 4–5; The Bureau of Investigative Journalism data, http://www.thebureauinvestigates.com/2012/05/08/yemen-reported-us-covert-action-2012/; Report submitted by the Alkarama Foundation to the UN Special Rapporteur on Counter-Terrorism, The United States’ War on Yemen: Drone Attacks (3 June 2013), pp. 19–20; S. Batati, ‘Death of Traumatised Boy Sparks Outrage at US Drone Strikes in Yemen’, Gulf News (21 March 2014), http://gulfnews.com/news/gulf/yemen/death-of-traumatised-boy-sparks-outrage-at-us-drone-strikes-in-yemen-1.1306981.
Prior to entering the contract, BT should have undertaken thorough human rights due diligence, including consultation with relevant stakeholders, to find out the use to which its products would be put and any potential risk of human rights abuses. It is hard for BT to argue that it was not aware of drone strikes or American policy, given the widely available press reports of drone strikes in Pakistan, Somalia, Yemen, Libya and Afghanistan. The first drone strike took place in Yemen in 2002 and, after that, there have been persistent strikes in Pakistan’s north-western territories with reports of civilian casualties. A large multinational enterprise such as BT has corporate compliance and media departments constantly reviewing public relations concerning its business dealings and contractual partners. This process would have picked up adverse publicity relating to American drone strikes, and BT would have been aware of the potential or actual risk of being associated with American policy.
B. Product Liability
Product liability for faulty, defective, and malfunctioning autonomous weapons may be one means of establishing a private actor pre-deployment duty of care.[52] In the UK, the Consumer Protection Act 1987 establishes civil liability for unsafe products under which the producer of an unsafe product is held strictly liable in damages for defects that cause damage.[53] Primary liability for defective products lies with producers,[54] but may include persons who market products under their own brand name, and importers.[55] “Damage” includes death or personal injury or any loss or damage to any property, including land.[56] “Defect” means the safety of the product is not such as persons “generally are entitled to expect” in the context of risks of damage to property, death or personal injury. As to how to determine what persons “generally are entitled to expect”, all circumstances must be taken into account including: (i) the manner in which, and purposes for which, the product has been marketed, its get-up, the use of any mark in relation to the product and any instructions for, or warnings with respect to, doing or refraining from doing anything with or in relation to the product; (ii) what might reasonably be expected to be done with or in relation to the product; and (iii) the time when the product was supplied by its producer to another.[57]
In human-weapon integrated systems, such as semi-autonomous weapons, product liability could be relied on by military personnel. But could it also extend to semi-autonomous and fully autonomous weapons used against enemy combatants and civilians suffering injury or death in foreign territories? This may seem remote and unreasonable, but the Rome II Regulation, which applies to non-contractual obligations arising out of damage caused by a product, provides potential for such an extension. The general principle under Article 4(1) is that the applicable law will be the law of the country in which the damage occurs, irrespective of the country in which the event giving rise to the damage occurred, and irrespective of the country or countries in which the indirect consequences of that event occur.[58] But under Article 4(3), if the tort or wrongdoing is “manifestly more closely connected with a[nother] country”, then the law of that country will apply.[59] Factors relevant to this assessment include any pre-existing relationship, such as a contract, the event and its consequences and the parties themselves.[60] Thus, in cases where a tort claim is closely connected to a contract between the parties, Article 4(3) allows the applicable law in contract to apply to the tort claim.[61]
52. See, for example, P.M. Asaro, ‘A Body to Kick, but Still No Soul to Damn: Legal Perspectives on Robotics’, in P. Lin, K. Abney and G. Bekey (eds.), Robot Ethics: The Ethical and Social Implications of Robotics (Cambridge, Massachusetts & London, England, The MIT Press, 2012) pp. 170–176, discussing product liability in relation to robotics in the civilian sphere.
53. Consumer Protection Act 1987 (UK) (1987 c 43).
54. Id., s 2(2)(a).
55. Id., s 2(2)(b) and s 2(2)(c).
56. Id., s 5(1).
57. Id., s 3(1), s 3(2)(a)–3(2)(c).
58. European Parliament and Council Regulation (EC) 864/2007 (OJ L199, 31.7.2007, p. 40) (‘Rome II Regulation’), art 4(1).
59. Rome II Regulation, art 4(3).
60. See art 4(3); K. Farmaner, ‘Liability & Multi-Party Accidents Abroad’, Vol. 167 NLJ 2017, p. 10; C.S.A. Okoli and G.O. Arishe, ‘The Operation of the Escape Clauses in the Rome Convention, Rome I Regulation and Rome II Regulation’, Vol. 8 No. 3 Journal of Private International Law 2012, pp. 513–545, pp. 535–541.
Although Article 4(3) is referred to as an “escape clause”, and is intended as an exception “where it is clear from all the circumstances of the case that the tort/delict is manifestly more closely connected with another country”,[62] an element of flexibility between general and exceptional rules is allowed in order to satisfy “the requirement of legal certainty and the need to do justice in individual cases”.[63] A British manufacturer of autonomous weapons supplying the military of another country, which then goes on to use such weapons in a third country causing injury and death, arguably falls under this provision. The technological know-how, and the predictability and reliability testing for the weapon, exist in the UK. If the applicable law of the contract to supply autonomous weapons is UK law, then there is a pre-existing relationship allowing for the same law to apply to the tort claim.
Another scenario may be the British manufacturer supplying autonomous weapons to the UK military, authorised by the UK Government for use in conflict zones. There is a pre-existing relationship in contract between the manufacturer and the UK Government, and UK law applies so that any subsequent tort claim could be brought by those injured in foreign territories. The recent UK Supreme Court judgments in Belhaj and another (Respondents) v. Straw and others (Appellants) and Rahmatullah (No 1) (Respondent) v. Ministry of Defence and another (Appellants)[64] are encouraging in this respect. These are tort claims against the UK for alleged complicity in torture and irregular rendition in which the Court decided that defences of State immunity and/or the foreign act of State doctrine could not apply.
III. Specific Pre-deployment Duties in relation to Autonomous Weapons
This Part identifies and explores the content of four specific pre-deployment duties in relation to autonomous weapons: (1) duty during development and testing; (2) duty to train in use of supporting technologies and equipment; (3) duty to properly equip with supporting technologies, devices, and equipment; and (4) duty of ongoing review.
1. Duty during Development and Testing
During the development and testing stage of autonomous weapons, designers and manufacturers have time to think things through, to plan and to exercise judgment in building autonomy levels and functionality. As they work on instructions from civilian and military commanders, it would not be unreasonable to expect a duty of care in designing and manufacturing autonomy levels and functionality. Decisions on appropriate levels of autonomy and functionality are reached through a combination of engineering expertise and civilian and military demands. These design, planning and decision-making activities are sufficiently far removed from the pressures and risks of active operations to render it reasonable to expect a duty of care to be owed. The detailed content of the duty will vary according to the level of weapon autonomy and functionality (see Appendix). Duties in relation to remotely controlled and semi-autonomous weapons may include ensuring human control or monitoring of ATSA functions, and properly functioning and IHL-compliant technology that integrates human action with the weapon, human input for the attack command, and a human input abort mechanism. Duties in relation to fully autonomous weapons may include ensuring properly functioning and IHL-compliant autonomous ATSA functions, attack and abort mechanisms, and integration of a human review mechanism.
2. Duty to Train in Use of Supporting Technologies and Equipment
Whether a weapon is remotely controlled, semi-autonomous or fully autonomous, technologies and equipment support its performance. A number of “interface errors” may arise in human-weapon integrated systems affecting actual performance. These are errors, failures to perform, and breakdowns caused by miscommunication and mis-coordination between the machine and the human operator.[65] Several duties arise from such scenarios.
61. See T.K. Graziano, ‘Freedom to Choose the Applicable Law in Tort - Article 14 and 4(3) of the Rome II Regulation’, in W. Binchy and J. Ahern (eds.), The Rome II Regulation on the Law Applicable to Non-Contractual Obligations: A New Tort Litigation Regime (Leiden & Boston, Martinus Nijhoff Publishers, 2009) pp. 113–132.
62. Rome II Regulation, Recital 18.
63. Rome II Regulation, Recital 14.
64. Belhaj and another (Respondents) v. Straw and others (Appellants) (2017) UKSC 3 (joined with) Rahmatullah (No 1) (Respondent) v. Ministry of Defence and another (Appellants) (2017) UKSC 3.
65. Asaro, supra note 24, pp. 83–107, pp. 90–91.
First, human operators will need proper and adequate training in the use of devices, equipment and technologies integrating human action with the weapon, especially attack and abort commands. In Smith, the Challenger claims related to a failure to provide soldiers with adequate pre-deployment and in-theatre recognition training. Second, human operators would in turn have to ensure clear communication and coordination. Third, in case the communication link is lost or faulty, a back-up mechanism would need to be in place for the operator to activate in order to suspend or abort operations until communication is restored.
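The third duty implies some technical back-up when the control link fails. A minimal sketch of what such a mechanism might look like is set out below; the LinkWatchdog name, the two-second timeout and the suspend/abort hooks are assumptions made for illustration only and do not describe any fielded system.

```python
import time


class LinkWatchdog:
    """Hypothetical monitor that suspends operations when the control link is lost.

    The timeout value and the suspend/abort behaviour are illustrative
    assumptions, not drawn from any actual weapon system.
    """

    def __init__(self, timeout_seconds: float = 2.0):
        self.timeout = timeout_seconds
        self.last_heartbeat = time.monotonic()
        self.suspended = False

    def heartbeat(self) -> None:
        """Called whenever a valid message arrives from the human operator."""
        self.last_heartbeat = time.monotonic()
        self.suspended = False  # link restored, operations may resume

    def check(self) -> str:
        """Periodically called; returns the action the platform should take."""
        if time.monotonic() - self.last_heartbeat > self.timeout:
            self.suspended = True
            return "SUSPEND_OPERATIONS"  # hold until the link is restored
        return "CONTINUE"

    def operator_abort(self) -> str:
        """Back-up mechanism the operator can activate irrespective of the link."""
        self.suspended = True
        return "ABORT"
```

Whatever the implementation, the legal point is the same: the duty would be to provide such a mechanism and to train operators in activating it.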
3. Duty to Properly Equip with Supporting Technologies, Devices, and Equipment
Actual technologies, devices, and equipment used to support the weapon would need to be provided. This becomes relevant where human direction and monitoring are involved so that operators would need to be adequately and properly equipped with devices, equipment and technologies necessary for carrying out ATSA functions, especially attack and abort commands. In Smith, the Challenger claims related to a failure to properly equip the claimants with specific technology and equipment that would have prevented death and injury. This included target identity devices providing automatic confirmation as to whether a vehicle is a friend or foe, and situational awareness equipment permitting tank crews to locate their position and direction of sight accurately.
4. Duty of Ongoing Review
Remotely controlled, semi-autonomous or fully autonomous weapons will need ongoing human review to ensure proper performance and functionality compliant with IHL. This process can also help identify and resolve “interface errors” in remotely controlled and semi-autonomous weapons, as well as problems of “dynamic learning systems” in fully autonomous weapons. The extent and timing of review will depend on the modifications and additions made to new technology, but there should also be regular ongoing review to capture any errors and performance problems.
IV. Pre-deployment Weapons Review Obligation for Autonomous Weapons
Article 36 of API provides:
In the study, development, acquisition or adoption of a new weapon, means or method of warfare, a High Contracting Party is under an obligation to determine whether its employment would, in some or all circumstances, be prohibited by this Protocol or by any other rule of international law applicable to the High Contracting Party.
This establishes a State obligation to review the compliance of new weapons and their intended use with international law.[66] Arguably, State practice preceding the adoption of API creates a customary international law obligation for States not party to API (such as the US) to review the compliance of the acquisition or use of new weapons with rules of IHL.[67] More specifically, this means considering the legality of the use of such weapons under API, treaty law, and customary international law, including the IHL principles of legitimate targeting and distinction, proportionality, precautions, and unnecessary suffering. The International Law Association Study Group on Challenges of 21st Century Warfare has called for reviews to integrate legal considerations during the design stage, and for States to develop robust policy guidelines to ensure compliance with IHL standards.[68]
States must set up and implement their own weapons review systems, which are not subject to international oversight or supervision. This does not mean that the weapons review obligation is intended as an entirely subjective standard, and States are responsible for any wrongful damage resulting from non-implementation of the review obligation.[69]
66. See generally, W.H. Boothby, Weapons and the Law of Armed Conflict (Oxford, Oxford University Press, 2016), pp. 343–355; J. McClelland, ‘The Review of Weapons in Accordance with Article 36 of Additional Protocol I’, Vol. 850 International Review of the Red Cross 2003, p. 397; I. Daoust, R. Coupland and R. Ishoey, ‘New Wars, New Weapons? The Obligation of States to Assess the Legality of Means and Methods of Warfare’, Vol. 846 International Review of the Red Cross 2002, p. 345; ICRC, A Guide to the Legal Review of New Weapons, Means and Methods of Warfare: Measures to Implement Article 36 of Additional Protocol I of 1977 (2006).
67. See Boothby, id., pp. 342–343, who refers to an “implied obligation” with examples of the US and Sweden conducting reviews since 1974.
68. International Law Association, ‘Interim Report of the Study Group on Challenges of 21st Century Warfare: The Conduct of Hostilities and International Humanitarian Law’, 2014, p. 14, http://www.ila-hq.org/index.php/study-groups?study-groupsID=58.
Under Article 91 of API, any party to a conflict, whether the eventual victor or vanquished party, that violates provisions of the Geneva Conventions or API resulting in loss or damage will be liable to pay compensation.[70] But there are a number of concerns with how Article 36 currently operates.
States make determinations on the legality of new weapons, including existing weapons subject to upgrades or modifications changing their characteristics and capabilities, or existing weapons acquired by importer States for the first time; both cases fall into the category of “new” weapons.[71] The obligation only concerns the normal or expected use of the weapon as seen at the time of the evaluation, in some or all circumstances, and not its possible misuse. If a State determines that a weapon is illegal, this does not by itself create a mandatory rule of international law vis-à-vis third parties, even for the State concerned, and there is no obligation for the State to make its determination public. Although a State’s determination of the legality of a new weapon is not internationally binding, the intention is that, by being obliged to make such a determination, States do not adopt methods and means of warfare without examining issues of legality with care.[72] Article 36 does not require public disclosure of State review procedures, and States parties are not obliged to reveal any information on new weapons being developed or manufactured.[73] State discretion in implementation allows protection of nationally sensitive weapons information but may result in divergent State practice with a lack of transparency and enforceability.[74] Friendly States may collaborate, co-fund research and development, and share information during testing, but are not obliged to share review information with importer States. IHL-conscious States undertaking rigorous reviews may be placed at a disadvantage if their adversaries are willing to employ new weapons tested at low levels of combat effect reliability.
So, what does this mean in terms of making national determinations justiciable? What happens if there is a failure to adequately consider compliance with international law, or if after due diligence harm still occurs? Does the fact that a review has taken place, irrespective of its inadequacy, absolve the State party from any further responsibility? And if it does not, who has an interest in bringing a claim of State responsibility? According to Articles 36 and 91, it is a State v. State action which must be brought for non-compliance, thus excluding the possibility of individual combatant or civilian claims. But this does imply an obligation to establish internal procedures with a view to elucidating the problem of illegality and, therefore, States parties can ask for information on this point.[75] Since the review obligation is on States parties to API, it is States as manufacturers or importers/exporters of new weapons that can be held responsible, rather than private companies. Even if weapons manufacturing is outsourced to a private company, the State is still obliged to review IHL compliance, which necessarily means legal input at the manufacturing stage.
1. Lifecycle Stages and Operational Factors of Autonomous Weapons
A realistic and manageable way of understanding the review process is through the lifecycle of the weapon and in this respect McClelland identifies three key stages: demonstration, manufacture, and in-service.[76] From a design and engineering perspective, Backstrom and Henderson identify three key operational factors: achieving correct target recognition, determining how to exercise weapon release authorisation, and controlling (or limiting) the weapon effect.[77]
69. Y. Sandoz, C. Swinarski and B. Zimmermann (eds.), ICRC Commentary on the Additional Protocols of 8 June 1977 to the Geneva Conventions of 12 August 1949 (ICRC, 1987) (‘ICRC Commentary’), § 1466.
70. Id., §§ 3652–3655.
71. Id., § 1472. See also, Daoust et al, supra note 66, p. 352.
72. ICRC Commentary, supra note 69, § 1469.
73. Id., § 1481.
74. See B. Rappert, R. Moyes, A. Crowe, and T. Nash, ‘The Roles of Civil Society in the Development of Standards around New Weapons and Other Technologies of Warfare’, Vol. 94 No. 886 International Review of the Red Cross 2012, p. 765; V. Boulanin, ‘Implementing Article 36 Weapon Reviews in the Light of Increasing Autonomy in Weapon Systems’, No. 2015/1 SIPRI Insights on Peace and Security November 2015, http://books.sipri.org/files/insight/SIPRIInsight1501.pdf; Article 36, ‘Article 36 reviews and addressing Lethal Autonomous Weapons Systems’, Briefing Paper, April 2016, http://www.article36.org/wp-content/uploads/2016/04/LAWS-and-A36.pdf.
75. ICRC Commentary, supra note 69, § 1482.
76. McClelland, supra note 66, p. 401.
For the review process to be meaningful it needs a multi-disciplinary approach combining engineering, legal, and operator expertise. Discussion of lifecycle stages and operational factors in relation to autonomous weapons will help us identify what specific review obligations may exist.
A. Demonstration Stage Reliability and IHL Compliance
Designing and testing an autonomous weapon that performs all or some of the critical functions will require investment and extensive testing. This stage initially assesses whether such a weapons capability is viable, and then undertakes further testing to ensure reliability of critical functions. In engineering terms, reliability means “the probability of correct functioning to a specified life (measured in time, cycles of operation etc) at a given confidence level”.[78] In legal terms, this means that the weapon’s critical functions relating to targeting, attack authorisation, and weapons effect should be IHL-compliant. A weapon could fail to meet IHL standards due to inadequate technical specification or a design flaw not identified during testing. Autonomous weapons with full-spectrum critical functions will require rigorous testing which, due to limited resources, may be compromised. Backstrom and Henderson argue that the target recognition system on an autonomous weapon may require a high statistical confidence to minimise lethal weapon deployment on civilians, and if this cannot be achieved due to budgetary constraints “appropriate limits” should be set on the use of the weapon until field experience provides reliability confidence.[79]
This seems a sensible approach but what are the “appropriate limits”? If the weapon identifies the target correctly but cannot make an assessment of collateral damage, should it be used? This may not be a problem with semi-autonomous weapons where there is human authorisation or intervention for attack allowing proportionality assessments to be made by a human. It may also not pose an issue for fully autonomous weapons where the independent attack decision and action can be disabled, allowing human control and oversight.
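To make the idea of “high statistical confidence” concrete, the sketch below computes a one-sided lower confidence bound on target-recognition accuracy from test results and gates weapon release on a required threshold. It is an assumption-laden illustration rather than Backstrom and Henderson’s method: the Wilson-bound formula is standard, but the confidence level, the 0.99 threshold and the trial figures are hypothetical.

```python
import math


def wilson_lower_bound(successes: int, trials: int, z: float = 2.326) -> float:
    """One-sided Wilson score lower bound on a success probability.

    z = 2.326 corresponds roughly to 99% one-sided confidence; the choice of
    confidence level is an assumption for illustration only.
    """
    if trials == 0:
        return 0.0
    p_hat = successes / trials
    denom = 1 + z * z / trials
    centre = p_hat + z * z / (2 * trials)
    margin = z * math.sqrt(p_hat * (1 - p_hat) / trials + z * z / (4 * trials * trials))
    return (centre - margin) / denom


def release_permitted(successes: int, trials: int, required_lower_bound: float = 0.99) -> bool:
    """Gate weapon release on demonstrated recognition reliability.

    The 0.99 threshold is hypothetical: it stands in for the "appropriate
    limits" the text says should apply until field experience provides
    reliability confidence.
    """
    return wilson_lower_bound(successes, trials) >= required_lower_bound


# Example: 4,970 correct identifications in 5,000 test presentations gives a
# point estimate of 99.4% and a one-sided 99% lower bound of about 99.1%, so a
# required bound of 0.99 would be met but a stricter 0.995 bound would not.
print(round(wilson_lower_bound(4970, 5000), 4))
print(release_permitted(4970, 5000, required_lower_bound=0.99))
```

Using a lower confidence bound rather than the raw success rate is one way of expressing “appropriate limits”: a weapon that performs well in only a small number of trials would still fail the gate until enough testing or field experience has accumulated.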
B. Manufacture Stage Specification and Quality Control
At the stage when the weapon is being produced, there could be problems relating to non-compliance with the design specification, shortages of funding halting production or compromising the quality of materials and software used, and batch variation in the production line. These may contribute towards poor weapons performance or performance below specification. A multi-disciplinary approach would be necessary to ensure that reliability and IHL compliance are followed through in manufacturing. Any deviations from the design specification have to be identified and an assessment made of their impact on reliability and IHL compliance.
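A minimal sketch, with entirely hypothetical field names, of how such deviations might be recorded and cleared; the point is simply that a deviation touching a critical function should not pass until its reliability impact has been assessed and the weapons review revisited.

```python
from dataclasses import dataclass

@dataclass
class SpecificationDeviation:
    """Hypothetical record of a manufacture-stage deviation from the reviewed design."""
    component: str
    description: str
    affects_critical_function: bool      # targeting, attack authorisation, or weapon effect
    reliability_impact_assessed: bool
    ihl_review_retriggered: bool

def deviation_cleared(d: SpecificationDeviation) -> bool:
    """A deviation touching a critical function is cleared only once its reliability
    impact has been assessed and the Article 36 review has been revisited."""
    if not d.affects_critical_function:
        return d.reliability_impact_assessed
    return d.reliability_impact_assessed and d.ihl_review_retriggered

dev = SpecificationDeviation(
    component="target-recognition software",
    description="substituted image-processing library in production batch 7",
    affects_critical_function=True,
    reliability_impact_assessed=True,
    ihl_review_retriggered=False,
)
print(deviation_cleared(dev))   # False -- the weapons review has not been revisited
```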
C. In-service Stage Ongoing Review
When the weapon is deployed, there is a need for continuing human monitoring of the reliability of its performance on actual missions, and for information from those missions to be gathered so that a historical performance data bank can be developed. This will help assess suitability for future missions and identify ongoing performance limitations or problems.
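The following sketch, with entirely illustrative field and function names, indicates the sort of information such a historical performance data bank might hold so that in-service reliability can be tracked and suitability for future missions assessed.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MissionRecord:
    """One in-service mission entry in a historical performance data bank (names are illustrative)."""
    mission_id: str
    environment: str            # e.g. "urban", "desert", "maritime"
    engagements: int
    correct_identifications: int
    aborts_by_operator: int
    anomalies: List[str] = field(default_factory=list)

@dataclass
class PerformanceDataBank:
    records: List[MissionRecord] = field(default_factory=list)

    def add(self, record: MissionRecord) -> None:
        self.records.append(record)

    def identification_rate(self, environment: str) -> float:
        """Observed identification reliability in one environment, for suitability assessments."""
        relevant = [r for r in self.records if r.environment == environment]
        total = sum(r.engagements for r in relevant)
        correct = sum(r.correct_identifications for r in relevant)
        return correct / total if total else 0.0

bank = PerformanceDataBank()
bank.add(MissionRecord("M-017", "urban", engagements=12, correct_identifications=11,
                       aborts_by_operator=1, anomalies=["late sensor hand-off"]))
print(f"Urban identification rate so far: {bank.identification_rate('urban'):.2%}")
```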
2. Ensuring Target Recognition Performs to Intended Function
The target recognition function of an autonomous weapon must be able to discriminate between legitimate and illegitimate targets. Aside from considering elements such as nature, location, purpose, and use, for human targets this may require an ability to interpret human behaviour in context. Whereas a human commander makes an assessment according to his reasonable belief, how is an autonomous weapon going to replicate such a reasonable belief?[80] It could rely on a databank of previous mission scenarios and performance to develop logic, and to the extent that it uses this logic its decisions and actions could be considered reasonable.[81] But the databank may not be reliable if the weapon is not exposed to different and varied scenarios or if the information is outdated. This suggests limiting use to known, set, and identifiable scenarios where the databank is reliable, but that ultimately limits the weapon’s capability. It also suggests that an integrated, human-weapon approach may be the best way forward. Any human oversight is only legally and operationally useful if operators “provide a genuine review and do not simply trust the system’s output”.[82] In relation to targeting, commanders have a legal responsibility to ensure appropriate precautions are taken and, regardless of remoteness in time or space from the moment of attack, individual and State responsibility will attach to those who authorise use of an autonomous weapon system.[83]

[77] A. Backstrom and I. Henderson, ‘New Capabilities in Warfare: An Overview of Contemporary Technological Developments and the Associated Legal and Engineering Issues in Article 36 Weapons Reviews’, Vol. 94 No. 886 International Review of the Red Cross 2012, p. 483; also available at http://ssrn.com/abstract=2198826, p. 4, pp. 1–36, pp. 13–14.
[78] Id., p. 29.
[79] Id., p. 30.
[80] Id., p. 14.
[81] On the possibility of autonomous systems having an “internal state” of belief and goals, see G. Sartor, ‘Autonomous Robotic (Cyber) Weapons?’, Rome, 20 November 2013, https://ccdcoe.org/publications/ethics/Sartor.pdf; G. Sartor, ‘Cognitive Automata and the Law’, EUI Working Paper LAW No. 2006/35.
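As a purely illustrative sketch, the discussion above of the “known, set, and identifiable scenarios” limitation and the requirement of genuine human review might be expressed as an explicit release condition along the following lines; the scenario list, field names, and figures are assumptions, not drawn from any fielded system.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """Hypothetical output of the weapon's target-recognition logic."""
    target_id: str
    scenario: str
    confidence: float

# Assumed whitelist of scenarios the historical databank reliably covers.
KNOWN_SCENARIOS = {"fixed-military-installation", "identified-armoured-vehicle"}

def release_authorised(rec: Recommendation, operator_confirms: bool,
                       operator_reviewed_evidence: bool) -> bool:
    """Release only if the scenario is one the databank covers AND a human has
    genuinely reviewed the underlying evidence, not merely accepted the output."""
    if rec.scenario not in KNOWN_SCENARIOS:
        return False                      # outside known, set, identifiable scenarios
    if not operator_reviewed_evidence:
        return False                      # rubber-stamping the system's output is not review
    return operator_confirms

rec = Recommendation("T-42", "identified-armoured-vehicle", confidence=0.97)
print(release_authorised(rec, operator_confirms=True, operator_reviewed_evidence=False))  # False
```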
3. Ensuring Human Control over Weapon Release or Authorisation
The ability to de-escalate prior to lethal use of force is a safety mechanism preventing unlawful targeting and forms part of IHL precautionary principles. Human reflexive action may be quick to judge a given scenario as inappropriate for attack and thus pull back at the last minute. What would be the safety mechanism for an autonomous weapon? Unlike firing a bullet from a rifle, where there is a period of time to reflect and the bullet may injure without necessarily killing, the “kill mode” of an autonomous weapon prevents any precautions or prior reflexive action to halt an attack. What would be the in-built reflective period for an autonomous weapon? Would it only have the capability to kill? These may not be issues for semi-autonomous weapons where there is human intervention to stop an attack or where attack proceeds only on human authorisation. Backstrom and Henderson suggest using non-lethal weapons, such as directed energy weapons, as part of a controlled escalation of force.[84]
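A sketch of how a controlled escalation of force with built-in human veto points might be structured; the ladder steps and the predicate names standing in for sensor and operator input are assumptions made for illustration, not a description of any existing system.

```python
from enum import Enum, auto
from typing import Callable, Optional

class Step(Enum):
    WARNING = auto()
    NON_LETHAL = auto()      # e.g. a directed-energy or other non-lethal effect
    LETHAL = auto()

ESCALATION_LADDER = [Step.WARNING, Step.NON_LETHAL, Step.LETHAL]

def engage(threat_persists: Callable[[], bool],
           operator_authorises_lethal: Callable[[], bool],
           operator_abort: Callable[[], bool]) -> Optional[Step]:
    """Walk the escalation ladder, checking for a human abort before every step and
    requiring explicit human authorisation before any lethal effect is applied."""
    last_applied: Optional[Step] = None
    for step in ESCALATION_LADDER:
        if operator_abort():                          # human can halt the attack at any point
            return last_applied
        if step is Step.LETHAL and not operator_authorises_lethal():
            return last_applied                       # no independent lethal action
        last_applied = step
        if not threat_persists():                     # de-escalate once the threat subsides
            return last_applied
    return last_applied

result = engage(threat_persists=lambda: True,
                operator_authorises_lethal=lambda: False,
                operator_abort=lambda: False)
print(result)   # Step.NON_LETHAL -- the lethal step is withheld absent human authorisation
```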
4. Ensuring Correct Employment against Identified Enemy Targets
Ensuring the autonomous weapon’s correct employment against identified enemy targets relates to taking feasible precautions under Article 57 of API. Such feasible precautions include the choice of means and methods of attack, the ability to abort an attack if excessive civilian casualties are expected, verifying military objectives, taking “reasonable precautions” to avoid civilian casualties, and considering the availability of alternatives and the use of non-lethal methods.
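To illustrate how these elements might be operationalised as an explicit pre-attack check, the following sketch uses field names that only loosely track the Article 57 elements discussed above; the names and the gating rule are assumptions made for the purpose of the example.

```python
from dataclasses import dataclass

@dataclass
class PrecautionsChecklist:
    """Illustrative pre-attack checklist loosely tracking the Article 57 API elements
    discussed in the text; field names are assumptions, not treaty language."""
    objective_verified_as_military: bool
    means_and_methods_chosen_to_minimise_civilian_harm: bool
    expected_civilian_harm_not_excessive: bool
    abort_mechanism_available: bool
    less_harmful_alternative_considered: bool

    def feasible_precautions_taken(self) -> bool:
        # Every element must be satisfied before the attack proceeds.
        return all(vars(self).values())

check = PrecautionsChecklist(True, True, True, True, False)
print(check.feasible_precautions_taken())   # False -- alternatives not yet considered
```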
V. Conclusion
A common law pre-deployment duty of care, owed to military personnel and potentially to enemy combatants and civilians, exists during the design, development, and manufacturing stages of autonomous weapons. This duty derives from pre-deployment activities that are so far removed from the actual theatre of operation that there is time to deliberate, consider alternatives, and foresee harm. Taken at its broadest, any activity that involves time to think, plan, and exercise judgment, and which is remote from actual combat, may attract a duty of care. In Smith, the court made a clear distinction between non-justiciable active combat scenarios and justiciable pre-deployment scenarios. It was not prepared to accept a blanket plea of combat immunity, or that all decisions necessarily fall within active combat scenarios. The court was also clear that any pre-deployment duty should not impose an unrealistic or disproportionate burden on the authorities. Private actor involvement in autonomous weapons also establishes a pre-deployment due diligence duty under the OECD Guidelines and product liability, with the potential to open up litigation to enemy combatants and civilians suffering injury or death in foreign territories.
The pre-deployment duty is relevant to autonomous weapons where there are humanly-controllable issues of design capability, functionality, and operational and performance review. Four distinct pre-deployment duties arise. First, the duty during development and testing of autonomous weapons to ensure proper design and manufacture of autonomy levels and functionality; specifically, ensuring properly functioning and IHL-compliant autonomous ATSA functions, attack and abort mechanisms, and integration of a human review mechanism. Second, the duty to adequately and properly train human operators in the use of supporting technologies and equipment for autonomous weapons. With human-weapon integrated systems there needs to be clear communication and co-ordination, and a back-up system in place to suspend or abort operations in case the communication link is broken. Third, the duty to properly equip and provide supporting technologies, devices, and equipment connected with autonomous weapons. Finally, the duty to undertake ongoing human review of autonomous weapons to ensure proper performance and functionality that is IHL-compliant, taking account of modifications and additions made to new technology, and of performance issues of “dynamic learning systems”.

[82] Backstrom and Henderson, supra note 77, p. 16.
[83] Id., p. 18.
[84] Id., p. 20.
States have a pre-deployment obligation under Article 36 of API to review new weapons for their compatibility with IHL. Unlike the common law pre-deployment duty of care, which offers legal recourse to individuals, the Article 36 obligation is enforced by States. States must set up and implement their own weapons review systems which, although not subject to international supervision, leave States responsible for any wrongful damage resulting from non-implementation. Before a new weapon is deployed, continuous assessment is required during development and testing to decide whether normal or expected use of the weapon is prohibited by API or any other rule of international law, including the principles of legitimate targeting, proportionality, and unnecessary suffering. As with the common law pre-deployment duty of care, similar obligations exist taking account of autonomous weapons’ lifecycle stages. At the demonstration stage, rigorous testing of weapons capability would need to ensure reliability of critical functions; targeting, attack authorisation, and weapons effect should be IHL-compliant. At the manufacturing stage, a multi-disciplinary approach combining engineering, robotics, and legal expertise is necessary to ensure design reliability and IHL compliance. At the in-service stage, there is a need for continuing human monitoring of performance reliability on actual missions to assess suitability for future missions, and to identify ongoing performance limitations or problems.
Appendix: Duties and obligations in relation to autonomy levels and functions of autonomous weapons